Ensemble CNN Networks for GBM Tumors Segmentation Using Multi-parametric MRI
Glioblastomas are the most aggressive, fast-growing primary brain cancers, originating in the glial cells of the brain. Accurate identification of the malignant brain tumor and its sub-regions remains one of the most challenging problems in medical image segmentation. The Brain Tumor Segmentation (BraTS) challenge has been a popular benchmark for automatic glioblastoma segmentation algorithms since its initiation. This year, the BraTS 2021 challenge provides its largest multi-parametric MRI (mpMRI) dataset to date, comprising 2,000 pre-operative patients. In this paper, we propose a new aggregation of two deep learning frameworks, namely DeepSeg and nnU-Net, for automatic glioblastoma recognition in pre-operative mpMRI. Our ensemble method obtains Dice similarity scores of 92.00, 87.33, and 84.10 and Hausdorff distances of 3.81, 8.91, and 16.02 for the enhancing tumor, tumor core, and whole tumor regions, respectively, on the BraTS 2021 validation set, ranking us among the top ten teams. These experimental findings provide evidence that the method can be readily applied clinically, thereby aiding brain cancer prognosis, therapy planning, and therapy response monitoring. A Docker image for reproducing our segmentation results is available online at https://hub.docker.com/r/razeineldin/deepseg21
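The abstract does not detail the fusion rule used to combine DeepSeg and nnU-Net; a minimal sketch, assuming unweighted averaging of the two networks' per-class softmax probability maps followed by a per-voxel argmax, would look like this (the toy arrays are hypothetical, not real model outputs):

```python
import numpy as np

def ensemble_segment(prob_a, prob_b):
    """Average two models' class-probability maps and take the argmax label.

    prob_a, prob_b: arrays of shape (num_classes, H, W) with softmax outputs.
    """
    avg = (prob_a + prob_b) / 2.0   # simple unweighted average of the models
    return np.argmax(avg, axis=0)   # per-voxel label map

# Toy 2-class example standing in for DeepSeg and nnU-Net outputs.
p1 = np.array([[[0.9, 0.2], [0.6, 0.1]],   # class 0 (background) probabilities
               [[0.1, 0.8], [0.4, 0.9]]])  # class 1 (tumour) probabilities
p2 = np.array([[[0.7, 0.4], [0.2, 0.3]],
               [[0.3, 0.6], [0.8, 0.7]]])
labels = ensemble_segment(p1, p2)  # 0 = background, 1 = tumour
```

Weighted averaging or majority voting are equally common fusion rules; the averaging variant shown here is the simplest to extend to more than two models.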
Self-supervised iRegNet for the Registration of Longitudinal Brain MRI of Diffuse Glioma Patients
Reliable and accurate registration of patient-specific brain magnetic resonance imaging (MRI) scans containing pathologies is challenging due to tissue appearance changes. This paper describes our contribution to the longitudinal brain MRI registration task of the Brain Tumor Sequence Registration Challenge 2022 (BraTS-Reg 2022). We developed an enhanced unsupervised learning-based method that extends iRegNet. In particular, incorporating an unsupervised learning paradigm as well as several minor modifications to the network pipeline allows the enhanced iRegNet method to achieve respectable results. Experimental findings show that the enhanced self-supervised model improves the initial mean (median) registration absolute error (MAE) from 8.20 (7.62) mm to 3.51 (3.50) mm on the training set, while achieving an MAE of 2.93 (1.63) mm on the validation set. Additional qualitative validation was conducted by overlaying pre-post MRI pairs before and after deformable registration. The proposed method scored 5th place during the testing phase of the MICCAI BraTS-Reg 2022 challenge. The Docker image to reproduce our BraTS-Reg submission results will be made publicly available. (Accepted in the MICCAI BraTS-Reg 2022 Challenge, as part of the BrainLes workshop proceedings distributed by Springer LNCS.)
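The mean (median) absolute error reported above is computed from Euclidean distances between corresponding anatomical landmarks in the fixed and warped scans. A short sketch with hypothetical landmark coordinates (the values are illustrative, not challenge data):

```python
import numpy as np

def registration_errors(landmarks_fixed, landmarks_warped):
    """Per-landmark Euclidean distances (mm) between corresponding points,
    summarised as the mean and median absolute error."""
    d = np.linalg.norm(landmarks_fixed - landmarks_warped, axis=1)
    return d.mean(), np.median(d)

# Hypothetical 3D landmark coordinates (mm) in the fixed and warped scans.
fixed  = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 35.0], [40.0, 10.0, 5.0]])
warped = np.array([[13.0, 16.0, 30.0], [15.0, 25.0, 34.0], [40.0, 10.0, 5.0]])
mean_err, median_err = registration_errors(fixed, warped)
```

Reporting both statistics, as the challenge does, separates typical accuracy (median) from sensitivity to outlier landmarks (mean).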
DeepSeg: Deep Neural Network Framework for Automatic Brain Tumor Segmentation using Magnetic Resonance FLAIR Images
Purpose: Gliomas are the most common and aggressive type of brain tumor due to their infiltrative nature and rapid progression. Distinguishing tumor boundaries from healthy cells remains a challenging task in the clinical routine. The Fluid-Attenuated Inversion Recovery (FLAIR) MRI modality can provide the physician with information about tumor infiltration. Therefore, this paper proposes a new generic deep learning architecture, namely DeepSeg, for fully automated detection and segmentation of brain lesions using FLAIR MRI data.
Methods: The developed DeepSeg is a modular decoupling framework. It consists of two connected core parts based on an encoding and decoding relationship. The encoder part is a convolutional neural network (CNN) responsible for spatial information extraction. The resulting semantic map is passed to the decoder part to obtain the full-resolution probability map. Based on a modified U-Net architecture, different CNN models such as Residual Neural Network (ResNet), Dense Convolutional Network (DenseNet), and NASNet have been utilized in this study.
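The encoder/decoder flow described above can be illustrated schematically; this toy sketch substitutes fixed pooling and nearest-neighbour up-sampling for the learned convolutions of the actual U-Net-based models, so it shows only the data flow (down-sample, up-sample, skip fusion, probability map), not the real architecture:

```python
import numpy as np

def max_pool2(x):
    """2x2 max pooling: the encoder's spatial down-sampling step."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour up-sampling: the decoder's resolution-recovery step."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def encode_decode(img):
    """Toy encoder/decoder: down-sample to a coarse semantic map, up-sample
    back, and fuse with the input via a U-Net-style skip connection."""
    coarse = max_pool2(img)              # encoder: spatial information extraction
    restored = upsample2(coarse)         # decoder: back to full resolution
    fused = (restored + img) / 2.0       # skip connection (element-wise fusion)
    return 1.0 / (1.0 + np.exp(-fused))  # squash to a probability map

img = np.arange(16, dtype=float).reshape(4, 4)
prob = encode_decode(img)                # full-resolution probability map
```

In the real framework the encoder is a pretrained CNN backbone (ResNet, DenseNet, or NASNet) and both paths are learned; the skip connection is what lets the decoder recover fine spatial detail lost during down-sampling.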
Results: The proposed deep learning architectures have been successfully tested and evaluated online on the MRI datasets of the Brain Tumor Segmentation (BraTS 2019) challenge, comprising 336 cases as training data and 125 cases as validation data. The Dice and Hausdorff distance scores of the obtained segmentation results range from 0.81 to 0.84 and from 9.8 to 19.7, respectively.
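Both metrics reported here have standard definitions: Dice measures overlap between the predicted and ground-truth masks, while the Hausdorff distance measures the worst-case boundary disagreement. A minimal sketch on toy binary masks (using a naive all-pairs distance computation for clarity):

```python
import numpy as np

def dice(a, b):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground voxel sets
    (naive O(n^2) pairwise distances; fine for small masks)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two overlapping 4x4 squares, offset by one row.
pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True
gt   = np.zeros((8, 8), bool); gt[3:7, 2:6] = True
d_score = dice(pred, gt)
h_dist = hausdorff(pred, gt)
```

In practice challenge evaluations use the 95th-percentile Hausdorff distance rather than the maximum, to reduce sensitivity to single stray voxels.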
Conclusion: This study demonstrated the feasibility and comparative performance of applying different deep learning models within the new DeepSeg framework for automated brain tumor segmentation in FLAIR MR images. The proposed DeepSeg is open-source and freely available at https://github.com/razeineldin/DeepSeg/. (Accepted to the International Journal of Computer Assisted Radiology and Surgery.)
Slicer-DeepSeg: Open-Source Deep Learning Toolkit for Brain Tumour Segmentation
Purpose
Computerized medical image processing assists neurosurgeons in localizing tumours precisely and plays a key role in modern image-guided neurosurgery. Hence, we developed a new open-source toolkit, namely Slicer-DeepSeg, for efficient and automatic brain tumour segmentation based on deep learning methodologies, aiding clinical brain research.
Methods
Our developed toolkit consists of three main components. First, Slicer-DeepSeg extends the 3D Slicer application and thus provides support for multiple input/output data formats and 3D visualization libraries. Second, Slicer core modules offer powerful image processing and analysis utilities. Third, the Slicer-DeepSeg extension provides a customized GUI for brain tumour segmentation using deep learning-based methods.
Results
The developed Slicer-DeepSeg was validated using a public dataset of high-grade glioma patients. The results showed that our proposed platform considerably outperforms other cloud-based 3D Slicer approaches.
Conclusions
The developed Slicer-DeepSeg enables the development of novel AI-assisted medical applications in neurosurgery. Moreover, it can enhance the outcomes of computer-aided diagnosis of brain tumours. The open-source Slicer-DeepSeg is available at github.com/razeineldin/Slicer-DeepSeg.
Deep automatic segmentation of brain tumours in interventional ultrasound data
Intraoperative imaging can assist neurosurgeons in delineating brain tumours and other surrounding brain structures. Interventional ultrasound (iUS) is a convenient modality with fast scan times. However, iUS data may suffer from noise and artefacts which limit their interpretation during brain surgery. In this work, we use two deep learning networks, namely UNet and TransUNet, to perform automatic and accurate segmentation of the brain tumour in iUS data. Experiments were conducted on a dataset of 27 iUS volumes. The outcomes show that combining a transformer with UNet is advantageous, providing efficient segmentation by modelling long-range dependencies within each iUS image. In particular, the enhanced TransUNet was able to predict cavity segmentation in iUS data with an inference rate of more than 125 FPS. These promising results suggest that deep learning networks can be successfully deployed to assist neurosurgeons in the operating room.
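The long-range dependency modelling mentioned above comes from the transformer's self-attention, in which every image-patch embedding attends to every other. A bare single-head sketch of the mechanism (random weights stand in for learned projections; this is illustrative, not the TransUNet implementation):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a sequence of patch embeddings.

    Every position attends to every other, so the output at each patch can
    depend on distant parts of the image (long-range dependencies)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])                 # pairwise affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)          # softmax over positions
    return weights @ v                                      # attention-weighted mix

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))                 # 16 image patches, 8-dim embeddings
wq, wk, wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
```

By contrast, a plain UNet's convolutions only mix information within a local receptive field, which is why adding the transformer branch helps with globally coherent structures such as the resection cavity.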
Towards automated correction of brain shift using deep deformable magnetic resonance imaging-intraoperative ultrasound (MRI-iUS) registration
Intraoperative brain deformation, the so-called brain shift, limits the applicability of preoperative magnetic resonance imaging (MRI) data to intraoperative ultrasound (iUS) guidance during neurosurgery. This paper proposes a deep learning-based approach for fast and accurate deformable registration of preoperative MRI to iUS images to correct brain shift. Based on a 3D convolutional neural network architecture, the proposed deep MRI-iUS registration method has been successfully tested and evaluated on the retrospective evaluation of cerebral tumors (RESECT) dataset. This study showed that our proposed method outperforms the registration methods of previous studies with an average mean squared error (MSE) of 85. Moreover, the method can register three 3D MRI-iUS pairs in less than a second, improving the expected outcomes of brain surgery.
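The MSE used above is an intensity-based similarity metric: a successful registration moves the iUS image back into alignment with the MRI, driving the MSE down. A toy illustration with synthetic data, where a rigid translation stands in for the learned per-voxel deformable field:

```python
import numpy as np

def mse(a, b):
    """Mean squared intensity difference between two aligned images."""
    return float(((a - b) ** 2).mean())

# Synthetic preoperative "MRI" and an intraoperative view displaced by brain shift.
rng = np.random.default_rng(1)
mri = rng.standard_normal((32, 32))
ius = np.roll(mri, shift=(2, -1), axis=(0, 1))   # simulated brain-shift displacement

# Here "registration" is simply the inverse translation; a learned deformable
# model would instead predict a dense displacement field and warp the image.
registered = np.roll(ius, shift=(-2, 1), axis=(0, 1))

mse_before = mse(mri, ius)        # misaligned: large error
mse_after = mse(mri, registered)  # aligned: error drops
```

Real MRI-iUS registration is harder than this sketch suggests because the two modalities have different intensity characteristics, which is one motivation for learning the similarity and the deformation jointly.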